\\M0NGR25;\M1BDI25;\F0
Draft for comments: This paper is in the file LIGHT.COM[2,JMC]@SAIL
and includes nit-picking comments by Bruce Anderson, who liked it.

***** 				  everyone else calls us SU-AI ↑

It was solicited as a review by the AI Journal.

Artificial  Intelligence: A General  Survey by
Professor  Sir  James Lighthill,  FRS, in  \F1Artificial  Intelligence: a
paper symposium\F0, Science Research Council 1973

\J	Professor   Lighthill  of  Cambridge   University  is   a  famous
hydrodynamicist  with a recent  interest in  applications to biology.
His review of  artificial intelligence  was at the  request of  Brian
Flowers, head of the Science  Research Council of Great Britain which
is the main funding body for British university research. Its purpose
was to help the Science Research Council decide  requests for support
of  work in AI.  Lighthill claims  no previous acquaintance  with the
field,   but refers  to a  large  number of  authors whose  works  he
consulted though not to any specific papers.

	Unfortunately,    workers  in  artificial  intelligence  lose
intellectual  contact with  Professor  Lighthill almost  immediately,
because he defines the field in such a way as to exclude our research
goals from consideration.   Namely, he begins by classifying  work in
artificial intelligence into  three categories A,  B and C.  A stands
for  applications  which  he  likes,  C  stands  for  connections  to
psychology and neurophysiology which he also likes,  and B stands for
"bridge" between the other two and also for "building robots" both of
which  he doesn't  like.   He states,  without admitting  that anyone
might disagree, that activities in B can be justified  only in so far
as they make a connection between A and C.

	This  classification   excludes  the  possibility   that  the
mechanisms  of   intelligent  behavior  can  be  studied  apart  from
applications and apart from realizations of these mechanisms in life.
However,  for almost  all workers  in the  field, the  whole idea  of
artificial  intelligence is  that the  relation between  problems and
problem solving methods and  the relation between situations  and the
behavior  that  will  achieve goals  can  be  studied  by theory  and
computer experiment as an independent subject.

	He makes no  argument for his classification,   and gives  no
hint that  anyone may think  differently.  This is  somewhat puzzling
since  a number of the documents submitted  by British AI workers for
his  consideration are  quite  explicit  about the  point.    Perhaps

***** this sentence isn't very clear
				↓

ignoring this claim plays a tactical role in justifying his proposal
that research in robotics be abandoned.  If AI research has scientific
problems of its own, those problems should be pursued even though the
level of funding may depend on the prospects for results at the
present level of knowledge and talent; but if the research is only a
means toward solving some other scientific or practical problems, then
the subject may be abandoned whenever there are more promising ways of
solving those other problems.

	Lighthill also uses the term \F1robot\F0 in a non-standard way
to refer to any computer program that is intended neither as a prototype
application nor as a model of human or animal behavior.  No-one has
bothered to define robotics in AI, but so far as I know the term is
only used for programs that interact physically with the world,
especially the systems that use television cameras and either
an arm or a vehicle as an effector.

	Having ignored the possibility that AI has goals  of its own,
Lighthill goes on  to document his claim that  it has not contributed
to applications or to  psychology and physiology.   He exaggerates  a
bit here,   and one is inclined  to spend one's effort  disputing his
claims that AI has not contributed to these other subjects.

***** but one shouldn't!

	In  my opinion,  AI's  contribution to practical applications
has been significant but peripheral to the central ideas and problems
of AI.   Thus the LISP language for symbolic  computing was developed
for  AI use,   but has  had applications to  symbolic computations in
other areas, e.g.  physics.   Moreover, some ideas from LISP  such as
conditional expressions and  recursive function definitions have been
used in other programming languages.  However, the ideas that have
been applied elsewhere don't have a specifically AI character; they
might have been developed without AI in mind, though in fact they
weren't.  Other examples include time-sharing, the first proposals for
which  had AI motivations  and some techniques  of picture processing
that were  first  developed in  AI laboratories  and  have been  used
elsewhere.  Even the  current work in automatic assembly using vision
could have been developed without AI  in mind.  The Dendral work  has
always  had a  specifically  AI character,  and  many of  the  recent
developments in  programming such as PLANNER and  CONNIVER have an AI
motivation.

***** Not at ALL clear that Planner and Conniver are advances in
programming of general utility.  I agree with Horning that most AI
programs are badly written, partly because of the many bad things
about Lisp.
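
	To make the borrowed ideas concrete, here is a minimal sketch,
written in present-day Common Lisp syntax rather than in the dialects
of 1974, with a function name invented for the example.  It shows a
conditional expression used as a value-returning form and a function
defined by recursion on its own name, the two LISP ideas mentioned
above:

	;; Conditional expression and recursive definition.
	(defun factorial (n)
	  (cond ((= n 0) 1)                       ; base case: 0! = 1
	        (t (* n (factorial (- n 1))))))   ; n! = n * (n-1)!

ALGOL 60, for instance, adopted conditional expressions and recursive
procedures in the same spirit.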

	AI's  contributions to  neurophysiology  have been  small and
mostly of a negative character, i.e. showing that  certain mechanisms
that neurophysiologists propose are not well defined or are inadequate to
carry  out the behavior they are supposed to  account for.  I have in
mind Hebb's proposals in his book \F1The  Organization of Behavior\F0.
No-one  today would believe  that the  gaps in  those ideas  could be
filled without adding something much  larger than the original  work.
Moreover, the last 20 years' experience in programming machines to
learn  and solve problems  makes it implausible  that cell assemblies
\F1per se\F0  would learn  much without  putting  in some  additional
organization, and  physiologists today  would be unlikely  to propose
such a theory.  However, merely showing that some things are unlikely
to work is not a \F1positive\F0 contribution.
I think there will be more interaction between AI and neurophysiology
as soon as the neurophysiologists are in a position to compare
information processing models of higher level functions with
physiological data.  There is little contact at the nerve cell level,
because, as Minsky showed in his PhD dissertation in 1954, almost any
of the proposed models of the neuron is a universal computing element,
so that there is no connection between the structure of the neuron and
what higher level processes are possible.
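
	Minsky's point can be illustrated by a small sketch, again in
present-day Common Lisp, taking the McCulloch-Pitts threshold unit as
one example of "the proposed models".  The unit fires when the
weighted sum of its inputs reaches its threshold; one choice of
weights makes it compute NAND, from which every Boolean function can
be built, so nets of such units can in principle compute anything:

	;; A threshold unit: fires (1) iff the weighted input sum
	;; reaches the threshold.
	(defun fires-p (weights inputs threshold)
	  (if (>= (reduce #'+ (mapcar #'* weights inputs)) threshold) 1 0))

	;; Weights (-1 -1) with threshold -1 give NAND, a universal gate:
	;; (nand-neuron 1 1) => 0, and every other input pair => 1.
	(defun nand-neuron (x y)
	  (fires-p '(-1 -1) (list x y) -1))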

	On the other  hand,  the  effects of artificial  intelligence
research on psychology have been larger, as attested by various
psychologists.  First of all, psychologists have begun to use models in
which  complex  internal  data structures  that  cannot  be  observed
directly  are attributed to  animals and people.   Psychologists have
come to use these models,  because they exhibit behavior  that cannot
be exhibited by models conforming to the tenets of behaviorism, which
essentially allows only connections between externally observable
variables.   Information processing  models in  psychology have  also
induced dissatisfaction  with psychoanalytic and  related theories of

***** ↑ I think the dissatisfaction is because these theories are
not formal/definite enough, not (necessarily) because they are
wrong. (Perhaps you agree - it isn't clear)

emotional behavior.  Namely, information processing models of
emotional states are definite enough to yield predictions that can be
compared with experiment or experience in a reasonably objective way;
psychoanalytic theories are not definite enough to do this.
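
	The contrast with behaviorism can be made concrete by a toy
sketch (present-day Common Lisp; the "animal" and its stimuli are
invented for the illustration).  A behaviorist model is in effect a
fixed table from observable stimulus to observable response; an
information processing model carries internal state that cannot be
observed directly.  The toy animal below responds to the same FOOD
stimulus differently depending on a hidden hunger variable, behavior
that no table over the observables alone can reproduce:

	(defvar *hunger* 0)     ; internal state, not directly observable

	(defun respond (stimulus)
	  (case stimulus
	    (time-passes (incf *hunger*) 'nothing)
	    (food (if (> *hunger* 2)
	              (progn (setf *hunger* 0) 'eats)
	              'ignores))))

	;; (respond 'food) => IGNORES at first, but after three
	;; TIME-PASSES stimuli the same stimulus yields EATS.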

	Contributions  of AI to  psychology are  further discussed in
the paper  \F1Some Comments  on the  Lighthill Report\F0  by  N.   S.
Sutherland which  was included  in the same  book with  the Lighthill
report itself.

	Systematic comment on the main section, entitled \F1Past
Disappointments\F0, is difficult because of the strange way the
subject is divided up, but here are some remarks:

	1. Automatic  landing systems for airplanes are  offered as a
field in  which conventional  engineering techniques  have been  more
successful than AI  methods.  Indeed, no-one would  advocate applying
the scene analysis or tree search techniques developed in AI research
to automatic landing  in the context in  which automatic landing  has
been developed.  Namely, radio signals are available to determine the
precise  position of  the airplane in  relation to  a straight runway
which is guaranteed clear of interfering objects.  AI techniques
would be necessary to make a system capable of landing on an
unprepared dirt strip with no radio aids, a strip that had to be
located and distinguished from roads visually and that might have
cows or potholes or muddy places on it.  The problem of automatically driving
an automobile in an  uncontrolled environment is even more  difficult
and will  definitely require AI  techniques, which, however,  are not
nearly ready for a full solution of such a difficult problem.

	2.  Lighthill  is  disappointed  that  detailed  knowledge of
subject matter has to be put into programs that  are to be successful
in theorem proving, interpreting  mass spectra, and game playing.  He
uses the word \F1heuristics\F0 in a non-standard way for this.  He
misses the fact that putting in such knowledge is essential to success
for people as well as for machines, and that there are great
difficulties in finding ways of

***** ↑ surely what he misses is the fact that putting in the knowledge
is ESSENTIAL for both person and machine.  The fact that it is
difficult is separate.

representing knowledge of the world in computer programs, and much AI
research and internal controversy is directed to this problem.
Moreover,  most  AI  researchers  feel that  more  progress  on  this
\F1representation problem\F0 is essential before substantial progress
can be made on the problem of automatic acquisition of knowledge.  Of
course, missing  these particular points is a  consequence of missing
the existence of  the AI  problem as distinct  from applications  and
study of the central nervous system.

	3. A  further disappointment is  that chess  playing programs
have only  reached an "experienced amateur" level  of play.  Well, if
programs can't do better than that by 1978, I shall lose 250 pounds
and will be disappointed too, though not extremely surprised.  The
present  level of  computer chess  is based  on the  incorporation of
certain intellectual  mechanisms in the  programs.  Some  improvement
can be made by further  refinement of the heuristics in the programs,
but probably master  level chess  awaits the ability  to put  general
configuration patterns into the programs in an easy and flexible way.
To that extent we now understand the problem better than before, but
I don't see how to set a date by which this problem must be solved in
order to avoid disappointment in the field of artificial intelligence
as a whole.

***** ↑ This para. is a bit odd - shouldn't you say why you AREN'T
disappointed?  I.e. because we now understand the problem better.
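
	The kind of intellectual mechanism present chess programs
incorporate can be suggested by a sketch (present-day Common Lisp,
with a toy game standing in for chess): depth-limited search of the
move tree with a heuristic evaluation at the cutoff.  The move
generator and evaluation function below are invented placeholders; a
real program would evaluate material, mobility and the like:

	;; Toy stand-ins so the sketch runs: a position is a pile of
	;; counters, a move removes 1 to 3 of them, and the static
	;; value is just the parity of what remains.
	(defun moves (position)
	  (remove-if (lambda (m) (> m position)) '(1 2 3)))
	(defun make-move (position m) (- position m))
	(defun static-value (position) (if (evenp position) 1 -1))

	;; Depth-limited minimax: search the move tree, applying the
	;; heuristic STATIC-VALUE where the search is cut off.
	(defun minimax (position depth maximizing-p)
	  (let ((ms (moves position)))
	    (if (or (= depth 0) (null ms))
	        (static-value position)
	        (let ((vals (mapcar (lambda (m)
	                              (minimax (make-move position m)
	                                       (- depth 1)
	                                       (not maximizing-p)))
	                            ms)))
	          (if maximizing-p
	              (reduce #'max vals)
	              (reduce #'min vals))))))

Refining the heuristics means refining STATIC-VALUE; putting general
configuration patterns into such a scheme in an easy and flexible way
is just the difficulty mentioned above.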

Did We Deserve It?

	Lighthill had  his shot  at AI and  missed, but  this doesn't
prove  that  everything in  AI  is ok.    In my  opinion,  present AI
research suffers  from some  major deficiencies apart  from the  fact
that  any scientists  would  achieve more  if they  were  smarter and
worked harder.

****** ( and had more PDP-10's, especially in the UK!!!!!!!!!)

	1. Much  work in  AI has  the "look  ma, no  hands"  disease.
Someone programs  a computer  to do  something no  computer has  done
before and writes a paper pointing out that the computer did it.  The
paper is not directed to the identification and study of intellectual
mechanisms and often contains no  coherent account of how the program
works  at all.  As an  example, consider  that the  SIGART Newsletter
prints the scores of the  games in the ACM Computer  Chess Tournament
just as though the programs were human players and their innards were
inaccessible.  We need to know why one program missed the right  move
in a position  - what was it thinking  about all that time?   We also
need  an  analysis of  what  class of  positions  the  particular one
belonged to and how a  future program might recognize this class  and
play better.

	2. Every now  and then, some AI scientist gets  an idea for a
general  scheme of  intelligent behavior that  can be  applied to any
problem provided the machine  is given the specific knowledge  that a
human has about  the domain.  Examples of this  have included the GPS
formalism, a simple predicate  calculus formalism, and more  recently
the  PLANNER  formalism  and  perhaps   the  current  Carnegie-Mellon
production formalism.  In the first and third  cases, the belief that
any problem solving  ability and knowledge could  be fitted into  the
formalisms led to published  predictions that computers would achieve

***** I haven't seen any!    ↑

certain  levels  of  performance  in certain  time  scales.    If the
inventors of  the formalisms  had been  right about  them, the  goals
might have  been achieved, but  regrettably they were  mistaken. Such
general purpose formalisms will be proposed from time to time, and
most likely an adequate one will eventually be discovered.
However, it would be  a great relief to the rest of the workers in AI
if the  discoverers of  new general  formalisms  would express  their
hopes in a more guarded form than has sometimes been the case.
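
	For readers unfamiliar with the last of these, here is a sketch
of the production formalism idea (present-day Common Lisp; the rules
and facts are invented for the illustration and are not drawn from the
Carnegie-Mellon system).  Working memory is a set of facts, each rule
pairs a condition with conclusions, and on each cycle some rule whose
condition is satisfied fires and adds its conclusions:

	;; Each rule is (condition-facts . added-facts); facts are symbols.
	(defparameter *rules*
	  '(((wet-street) . (recent-rain))
	    ((recent-rain) . (clouds-earlier))))

	;; Fire the first rule whose condition holds and whose conclusions
	;; are new; return the extended memory, or NIL if no rule applies.
	(defun fire-one (memory)
	  (loop for (condition . additions) in *rules*
	        when (and (subsetp condition memory)
	                  (not (subsetp additions memory)))
	          return (union additions memory)))

	;; Run to quiescence: (run '(wet-street)) adds RECENT-RAIN, then
	;; CLOUDS-EARLIER, and then no further rule applies.
	(defun run (memory)
	  (let ((next (fire-one memory)))
	    (if next (run next) memory)))

The hope criticized here is that any knowledge at all can be cast in
such a scheme; the guarded view is that this remains to be shown.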

	3. At present,  there does not exist  a comprehensive general
review of  AI that discusses all the main approaches and achievements
and issues.  Most likely, this is not because the field

*****						↑
use of "merely" implies there are no first-rate scholars, but what you
say below is that even if there are, it isn't enough.  ( Thus, delete
the word.)

doesn't have a first-rate scholar at present, but because the field
is confused about what these  approaches and achievements and  issues
are.   The production  of such  a review  will therefore  be a  major
creative work and not merely a work of scholarship.

	Even  if the  above-mentioned  deficiencies are  corrected, I
don't think AI  will avoid systematic  attacks of  a kind that  other
sciences don't suffer.  So many people hope that human level
artificial intelligence will never be achieved that attempts to prove
that it cannot be are inevitable.  It seems likely, however, that the
future attackers, like the past ones, will find the literature of the
field too uncongenial to attempt to acquire detailed technical
competence.\.

***** ↑↑ Exactly because (see your previous para!) understanding the
field requires MORE than scholarship!

				John McCarthy - 9 March 1974